-
Processing-in-memory (PIM), where compute is moved closer to memory or data, has been explored to accelerate emerging workloads. Different PIM-based systems have been announced, each offering a unique microarchitectural organization of its compute units, ranging from fixed functional units to programmable general-purpose compute cores near memory. However, one fundamental limitation of PIM is that each compute unit can only access its local memory; access to “remote” memory must occur through the host CPU, potentially limiting application performance scalability. In this work, we first characterize the scalability of real PIM architectures using the UPMEM PIM system. We analyze how the overhead of communicating through the host (instead of providing direct communication between the PIM compute units) can become a bottleneck for collective communications that are commonly used in many workloads. To overcome this inter-PIM-bank communication bottleneck, we propose PIMnet, a PIM interconnection network that provides direct connectivity between PIM banks and removes the overhead of communicating through the host. PIMnet exploits bandwidth parallelism, in which communication across the different PIM banks/chips can occur in parallel to maximize communication performance. PIMnet also matches the DRAM packaging hierarchy with a multi-tier network architecture. Unlike traditional interconnection networks, PIMnet is a PIM-controlled network in which communication is managed by the PIM logic, optimizing collective communications and minimizing the hardware overhead of PIMnet. Our evaluation shows that PIMnet provides up to an 85× speedup on collective communications and achieves an 11.8× improvement on real applications compared to the baseline PIM.
Free, publicly accessible full text available March 1, 2026.
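The bottleneck the abstract describes can be illustrated with a back-of-the-envelope cost model. The sketch below contrasts an AllReduce funneled through a single host link against a ring AllReduce over direct inter-bank links; all parameter names, bandwidth values, and the ring formulation are illustrative assumptions, not measurements from the UPMEM system or details of the PIMnet design.

```python
# Hypothetical cost model: host-mediated vs. direct inter-PIM AllReduce.
# Parameters and values are assumptions for illustration only.

def host_mediated_allreduce(n_banks: int, msg_bytes: int, host_bw: float) -> float:
    # Every bank's buffer crosses the single host link twice
    # (gather to host, broadcast back), so transfers serialize.
    return 2 * n_banks * msg_bytes / host_bw

def direct_allreduce(n_banks: int, msg_bytes: int, link_bw: float) -> float:
    # Ring AllReduce over direct links: 2*(n-1) steps, each moving
    # msg_bytes/n per bank, with all banks transferring in parallel.
    return 2 * (n_banks - 1) * (msg_bytes / n_banks) / link_bw

if __name__ == "__main__":
    n, size = 64, 1 << 20      # 64 banks, 1 MiB per bank (assumed)
    bw = 8e9                   # 8 GB/s for both link types (assumed)
    t_host = host_mediated_allreduce(n, size, bw)
    t_direct = direct_allreduce(n, size, bw)
    print(f"speedup from direct links: {t_host / t_direct:.1f}x")
```

Even with equal per-link bandwidth, the serialized host path scales linearly with the number of banks, while the direct ring's per-bank traffic stays roughly constant, which is the parallelism the abstract refers to.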
-
Processing-in-memory (PIM), where the compute is moved closer to the memory or the data, has been widely explored to accelerate emerging workloads. Recently, different PIM-based systems have been announced by memory vendors to minimize data movement and improve performance as well as energy efficiency. One critical component of PIM is the large amount of compute parallelism provided across many “PIM nodes,” the compute units near the memory. In this work, we provide an extensive evaluation and analysis of real PIM systems based on UPMEM PIM. We show that while there are benefits to PIM, there are also scalability challenges and limitations as the number of PIM nodes increases. In particular, we show how collective communications that are commonly found in many kernels/workloads can be problematic for PIM systems. To evaluate the impact of collective communication in PIM architectures, we provide an in-depth analysis of two workloads on the UPMEM PIM system that use representative collective communication patterns: AllReduce and All-to-All. Specifically, we evaluate 1) embedding tables, commonly used in recommendation systems, which require AllReduce, and 2) the Number Theoretic Transform (NTT) kernel, a critical component of Fully Homomorphic Encryption (FHE), which requires All-to-All communication. We analyze the performance benefits of these workloads and show how they can be efficiently mapped to the PIM architecture through alternative data partitioning. However, since each PIM compute unit can only access its local memory, when communication between PIM nodes is necessary (or remote data is needed), it must occur through the host CPU, severely hampering application performance.
To increase the scalability (or applicability) of PIM to future workloads, we make the case that future PIM architectures need efficient communication or interconnection networks between the PIM nodes, which will require both hardware and software support.
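The two collective patterns named above can be sketched in a few lines. The functions below model, in plain Python, what a host CPU logically does when it mediates each collective for a set of PIM nodes; the function names and list-of-lists representation are illustrative assumptions, not the UPMEM API.

```python
# Logical sketch of host-mediated collectives among PIM nodes.
# Each inner list is one node's local buffer; names are hypothetical.

def allreduce_via_host(node_buffers):
    """Sum-AllReduce: every node ends up with the elementwise global sum.
    The host gathers all buffers, reduces, and broadcasts the result back."""
    total = [sum(vals) for vals in zip(*node_buffers)]  # host-side reduce
    return [list(total) for _ in node_buffers]          # broadcast to all

def alltoall_via_host(node_buffers):
    """All-to-All: node i's j-th chunk is delivered to node j, i.e. the
    host performs a transpose of the per-node chunk matrix."""
    return [list(chunk) for chunk in zip(*node_buffers)]
```

Note that both routines touch every byte on the host, which is exactly why these patterns scale poorly when the host is the only path between nodes.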
-
Epoxy-based polymer networks from step-growth polymerizations are ubiquitous in coatings, adhesives, and as matrices in composite materials. Dynamic covalent bonds in the network allow its degradation into small molecules and thus enable chemical recycling; however, such degradation often requires elevated temperatures and costly chemicals, resulting in various small molecules. Here, we design crosslinked polyesters from structurally similar epoxy and anhydride monomers derived from phthalic acid. We achieve selective degradation of the polyesters through transesterification reactions at near-ambient conditions using an alkali carbonate catalyst, resulting in a singular phthalic ester. We also demonstrate upcycling the network polyesters to photopolymers by one-step depolymerization using a functional alcohol.
-
Abstract Climate change projections provided by global climate models (GCM) are generally too coarse for local and regional applications. Local and regional climate change impact studies therefore use downscaled datasets. While there are studies that evaluate downscaling methodologies, there is no study comparing the downscaled datasets that are actually distributed and used in climate change impact studies, and there is no guidance for selecting a published downscaled dataset. We compare five widely used statistically downscaled climate change projection datasets that cover the conterminous USA (CONUS): ClimateNA, LOCA, MACAv2-LIVNEH, MACAv2-METDATA, and NEX-DCP30. All of the datasets are derived from CMIP5 GCMs and are publicly distributed. The five datasets generally show good agreement across CONUS for Representative Concentration Pathways (RCP) 4.5 and 8.5, although the agreement among the datasets varies greatly depending on the GCM, and there are many localized areas of sharp disagreement. Areas of higher dataset disagreement emerge over time, and their importance relative to differences among GCMs is comparable between RCP4.5 and RCP8.5. Dataset disagreement displays distinct regional patterns, with greater disagreement in ΔTmax and ΔTmin in the interior West and in the North, and disagreement in ΔP in California and the Southeast. LOCA and ClimateNA are often the outlier datasets, and the seasonal timing of ClimateNA is somewhat shifted from the others. To easily identify regional study areas with high disagreement, we generated maps of dataset disagreement aggregated to states, ecoregions, watersheds, and forests. Climate change assessment studies can use the maps to evaluate and select one or more downscaled datasets for their study area.
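One simple way to quantify the per-cell dataset disagreement the abstract maps is the spread (max minus min) of the projected change across the products at each grid cell. The sketch below is a minimal illustration of that idea; the array shapes, the five hypothetical ΔTmax grids, and the spread metric are assumptions for the example, not the study's published method.

```python
# Illustrative per-grid-cell disagreement across downscaled datasets.
# Input values are made up; shapes and metric are assumptions.
import numpy as np

def dataset_disagreement(projections):
    """projections: (n_datasets, ny, nx) array of projected deltas
    (e.g., dTmax in degrees C). Returns the per-cell spread
    (max - min) across the datasets."""
    stack = np.asarray(projections, dtype=float)
    return stack.max(axis=0) - stack.min(axis=0)

# Five hypothetical 2x2 grids of projected dTmax change:
deltas = np.array([
    [[2.0, 2.1], [2.4, 3.0]],
    [[2.2, 2.0], [2.6, 2.5]],
    [[1.9, 2.3], [2.5, 2.8]],
    [[2.1, 2.2], [2.3, 2.9]],
    [[2.0, 2.1], [2.7, 2.6]],
])
spread = dataset_disagreement(deltas)
```

Aggregating such a per-cell field over states, ecoregions, or watersheds (e.g., with a zonal mean) would yield summary maps of the kind the abstract describes.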
-
Abstract Forests play a critical role in mitigating climate change, and, at the same time, are predicted to experience large-scale impacts of climate change that will affect the efficiency of forests in mitigation efforts. Projections of future carbon sequestration potential typically do not account for the changing economic costs of timber and agricultural production and land use change. We integrated a dynamic forward-looking economic optimization model of global land use with results from a dynamic global vegetation model and meta-analysis of climate impacts on crop yields to project future carbon sequestration in forests. We find that the direct impacts of climate change on forests, represented by changes in dieback and forest growth, and indirect effects due to lost crop productivity, together result in a net gain of 17 Gt C in aboveground forest carbon storage from 2000 to 2100. Increases in climate-driven forest growth rates will result in an 81%–99% reduction in costs of reaching a range of global forest carbon stock targets in 2100, while the increases in dieback rates are projected to raise the costs by 57%–132%. When combined, these two direct impacts are expected to reduce the global costs of climate change mitigation in forests by more than 70%. Inclusion of the third, indirect impact of climate change on forests through reduction in crop yields, and the resulting expansion of cropland, raises the costs by 11%–38% and widens the uncertainty range. While we cannot rule out the possibility of climate change increasing mitigation costs, the central outcomes of the simultaneous impacts of climate change on forests and agriculture are 64%–86% reductions in the mitigation costs. Overall, the results suggest that concerns about climate driven dieback in forests should not inhibit the ambitions of policy makers in expanding forest-based climate solutions.
